This work presents SeasoNet, a new large-scale multi-label land cover and land use scene understanding dataset. It includes $1\,759\,830$ images from Sentinel-2 tiles, with 12 spectral bands and patch sizes of up to $120\,\mathrm{px} \times 120\,\mathrm{px}$. Each image is annotated with pixel-level labels from the German land cover model LBM-DE2018, whose land cover classes are based on the CORINE Land Cover database (CLC) 2018, at a minimum mapping unit (MMU) five times smaller than that of the original CLC maps. We provide pixel-synchronized examples for all four seasons, plus an additional snowy set. These properties make SeasoNet the most extensive and largest remote sensing scene understanding dataset currently available, with applications ranging from scene classification and land cover mapping to content-based cross-season image retrieval and self-supervised feature learning. We provide baseline results by evaluating state-of-the-art deep networks on the new dataset in scene classification and semantic segmentation settings.
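To illustrate how such a dataset might be consumed, here is a minimal multi-label classification sketch in PyTorch; the class count and the widened input stem are assumptions for illustration, not SeasoNet specifics.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Hypothetical SeasoNet-style setup: 12-band Sentinel-2 patches,
# multi-hot targets over the CLC-derived class nomenclature.
NUM_BANDS = 12
NUM_CLASSES = 33  # placeholder; use the dataset's actual label count

model = resnet50(weights=None)
# Widen the stem to accept 12 spectral bands instead of RGB.
model.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.BCEWithLogitsLoss()  # multi-label, so per-class sigmoid, not softmax

x = torch.randn(8, NUM_BANDS, 120, 120)            # batch of 120 px patches
y = torch.randint(0, 2, (8, NUM_CLASSES)).float()  # multi-hot labels
loss = criterion(model(x), y)
loss.backward()
```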
State-of-the-art performance in electroencephalography (EEG) decoding tasks is currently often achieved with either Deep-Learning or Riemannian-Geometry-based decoders. Recently, there has been growing interest in Deep Riemannian Networks (DRNs), which could combine the advantages of both previous classes of methods. However, there is still a range of topics where additional insight is needed to pave the way for a more widespread application of DRNs in EEG. These include architecture design questions, such as network size and end-to-end ability, as well as model training questions. How these factors affect model performance has not been explored. Additionally, it is not clear how the data are transformed within these networks, and whether this would correlate with traditional EEG decoding. Our study aims to lay the groundwork on these topics through the analysis of DRNs for EEG across a wide range of hyperparameters. Networks were tested on two public EEG datasets and compared with state-of-the-art ConvNets. Here we propose end-to-end EEG SPDNet (EE(G)-SPDNet), and we show that this wide, end-to-end DRN can outperform the ConvNets while using physiologically plausible frequency regions. We also show that the end-to-end approach learns more complex filters than traditional band-pass filters targeting the classical alpha, beta, and gamma frequency bands of the EEG, and that performance can benefit from channel-specific filtering approaches. Additionally, architectural analysis revealed areas for further improvement due to the possible loss of Riemannian-specific information throughout the network. Our study thus shows how to design and train DRNs to infer task-related information from the raw EEG without the need for handcrafted filterbanks, and highlights the potential of end-to-end DRNs such as EE(G)-SPDNet for high-performance EEG decoding.
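For readers unfamiliar with DRNs, the following is a minimal, illustrative PyTorch sketch of the core SPDNet layer stack (BiMap, ReEig, LogEig) applied to spatial covariance matrices. It is not the authors' EE(G)-SPDNet; in particular, the Stiefel-manifold constraint on the BiMap weights is omitted here, whereas real DRN training uses a Riemannian optimizer.

```python
import torch
import torch.nn as nn

class BiMap(nn.Module):
    """Bilinear map W^T X W; W should live on the Stiefel manifold
    (left unconstrained here for brevity)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Parameter(torch.linalg.qr(torch.randn(d_in, d_out))[0])
    def forward(self, X):
        return self.W.T @ X @ self.W

class ReEig(nn.Module):
    """Rectify eigenvalues to keep matrices strictly positive definite."""
    def __init__(self, eps=1e-4):
        super().__init__()
        self.eps = eps
    def forward(self, X):
        w, V = torch.linalg.eigh(X)
        return V @ torch.diag_embed(w.clamp(min=self.eps)) @ V.transpose(-1, -2)

class LogEig(nn.Module):
    """Matrix logarithm: maps the SPD manifold to a flat tangent space."""
    def forward(self, X):
        w, V = torch.linalg.eigh(X)
        return V @ torch.diag_embed(w.clamp(min=1e-10).log()) @ V.transpose(-1, -2)

class EEGSPDNetSketch(nn.Module):
    """End-to-end sketch: learned temporal filtering on raw EEG, spatial
    covariance pooling, SPD layers, then a linear classifier."""
    def __init__(self, n_chans=22, n_filters=22, d_mid=16, n_classes=4):
        super().__init__()
        self.temporal = nn.Conv1d(n_chans, n_filters, kernel_size=25, padding=12)
        self.bimap, self.reeig, self.logeig = BiMap(n_filters, d_mid), ReEig(), LogEig()
        self.fc = nn.Linear(d_mid * d_mid, n_classes)
    def forward(self, x):                          # x: (batch, channels, time)
        h = self.temporal(x)
        C = h @ h.transpose(1, 2) / h.shape[-1]    # spatial covariance
        C = C + 1e-4 * torch.eye(C.shape[-1])      # jitter to ensure SPD-ness
        S = self.logeig(self.reeig(self.bimap(C)))
        return self.fc(S.flatten(1))
```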
A learned system uses machine learning (ML) internally to improve performance. We can expect such systems to be vulnerable to some adversarial-ML attacks. Often, the learned component is shared between mutually-distrusting users or processes, much like microarchitectural resources such as caches, potentially giving rise to highly-realistic attacker models. However, compared to attacks on other ML-based systems, attackers face a level of indirection as they cannot interact directly with the learned model. Additionally, the difference between the attack surface of learned and non-learned versions of the same system is often subtle. These factors obfuscate the de facto risks that the incorporation of ML carries. We analyze the root causes of potentially-increased attack surface in learned systems and develop a framework for identifying vulnerabilities that stem from the use of ML. We apply our framework to a broad set of learned systems under active development. To empirically validate the many vulnerabilities surfaced by our framework, we choose three of them and implement and evaluate exploits against prominent learned-system instances. We show that the use of ML caused leakage of past queries in a database, enabled a poisoning attack that causes exponential memory blowup in an index structure and crashes it in seconds, and enabled index users to snoop on each other's key distributions by timing queries over their own keys. We find that adversarial ML is a universal threat against learned systems, point to open research gaps in our understanding of learned-systems security, and conclude by discussing mitigations, while noting that data leakage is inherent in systems whose learned component is shared between multiple parties.
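As a hedged illustration of the last exploit, consider timing lookups against a hypothetical shared learned index; `index.lookup(key)` is an assumed interface, not a real library call. The idea is that a learned index's lookup cost grows with the model's local prediction error, which depends on the overall key distribution, including keys inserted by other tenants.

```python
import time
import statistics

def probe_latency(index, key, trials=100):
    """Median lookup latency for a single probe key over our own data."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        index.lookup(key)  # assumed interface of the shared learned index
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

def snoop_distribution(index, probe_keys):
    """Rank probe keys by latency; consistently slow regions suggest that
    foreign keys cluster there, inflating the model's local error."""
    return sorted(((probe_latency(index, k), k) for k in probe_keys), reverse=True)
```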
Cutting planes are a crucial component of state-of-the-art mixed-integer programming solvers, with the choice of which subset of cuts to add being vital for solver performance. We propose new distance-based measures to assess the value of a cut by quantifying the extent to which it separates relevant parts of the relaxed feasible set. For this purpose, we use the analytic centers of the relaxation polytope or of its optimal face, as well as alternative optimal solutions of the linear programming relaxation. We assess the impact of the choice of distance measure on root node performance and throughout the whole branch-and-bound tree, comparing our measures against those prevalent in the literature. Finally, we use multi-output regression to predict the relative performance of each measure from static features readily available before the separation process. Our results indicate that analytic center-based methods help to significantly reduce the number of branch-and-bound nodes needed to explore the search space and that our multi-output regression approach can further improve on any individual method.
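As a concrete illustration, though not the paper's exact procedure: the analytic center of $\{x : Ax \le b\}$ maximizes the log-barrier $\sum_i \log(b_i - a_i^\top x)$, and a cut $\alpha^\top x \le \beta$ can be scored by its normalized violation depth at that center. The SciPy sketch below works through a toy 2D example.

```python
import numpy as np
from scipy.optimize import minimize

def analytic_center(A, b, x0):
    """Maximize the log-barrier; x0 must be strictly feasible (A x0 < b)."""
    def neg_log_barrier(x):
        slack = b - A @ x
        if np.any(slack <= 0):
            return np.inf
        return -np.sum(np.log(slack))
    return minimize(neg_log_barrier, x0, method="Nelder-Mead").x

def cut_depth(alpha, beta, x_ac):
    """Euclidean distance from the analytic center to the cut hyperplane;
    positive when the cut separates the center."""
    return (alpha @ x_ac - beta) / np.linalg.norm(alpha)

# Toy example: unit box [0, 1]^2 and the cut x1 + x2 <= 0.5.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])
x_ac = analytic_center(A, b, x0=np.array([0.4, 0.4]))  # converges near (0.5, 0.5)
print(cut_depth(np.array([1., 1.]), 0.5, x_ac))        # depth ~ 0.35
```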
State-of-the-art image and text classification models, such as Convolutional Neural Networks and Transformers, have long performed their respective unimodal classification tasks satisfactorily, with accuracy close to or exceeding human accuracy. However, images embedded with text, such as hateful memes, are hard to classify using unimodal reasoning when difficult examples, such as benign confounders, are incorporated into the dataset. We attempt to generate more labeled memes in addition to the Hateful Memes dataset from Facebook AI, based on the framework of a winning team from the Hateful Memes Challenge. To increase the number of labeled memes, we explore semi-supervised learning using pseudo-labels for newly introduced, unlabeled memes gathered from the Memotion Dataset 7K. We find that the semi-supervised learning task on unlabeled data requires human intervention and filtering, and that adding a limited amount of new data yields no extra classification performance.
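A minimal sketch of the pseudo-labeling loop described above, assuming a scikit-learn-style classifier; the confidence threshold and interfaces are illustrative, and the manual filtering step mentioned in the findings is not automated here.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.95  # illustrative; tune on a validation set

def pseudo_label(model, X_labeled, y_labeled, X_unlabeled):
    """Train on labeled memes, pseudo-label confident unlabeled ones,
    then retrain on the union of both sets."""
    model.fit(X_labeled, y_labeled)
    probs = model.predict_proba(X_unlabeled)   # (n_unlabeled, n_classes)
    confidence = probs.max(axis=1)
    keep = confidence >= CONFIDENCE_THRESHOLD  # drop low-confidence items
    X_aug = np.vstack([X_labeled, X_unlabeled[keep]])
    y_aug = np.concatenate([y_labeled, probs[keep].argmax(axis=1)])
    model.fit(X_aug, y_aug)                    # retrain on the augmented set
    return model, int(keep.sum())
```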
Powerful hardware services and software libraries are vital tools for quickly and affordably designing, testing, and executing quantum algorithms. A robust large-scale study of how the performance of these platforms scales with the number of qubits is key to providing quantum solutions to challenging industry problems. Such an evaluation is difficult owing to the availability and price of physical quantum processing units. This work benchmarks the runtime and accuracy for a representative sample of specialized high-performance simulated and physical quantum processing units. Results show that the QMware cloud computing service can reduce the runtime for executing a quantum circuit by up to 78% compared to the next fastest option for algorithms with fewer than 27 qubits. The AWS SV1 simulator offers a runtime advantage for larger circuits, up to the maximum 34 qubits available with SV1. Beyond this limit, QMware provides the ability to execute circuits as large as 40 qubits. Physical quantum devices, such as Rigetti's Aspen-M2, can provide an exponential runtime advantage for circuits with more than 30 qubits. However, the high financial cost of physical quantum processing units presents a serious barrier to practical use. Moreover, of the four quantum devices tested, only IonQ's Harmony achieves high fidelity with more than four qubits. This study paves the way to understanding the optimal combination of available software and hardware for executing practical quantum algorithms.
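A generic runtime-scaling benchmark of this kind can be sketched with Qiskit and a local simulator, as below; this is not the study's benchmark suite, and the circuit family, depth, and shot count are placeholders. A cloud backend (e.g. AWS Braket SV1 or QMware) could be swapped in behind the same loop to compare platforms.

```python
import time
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def benchmark(n_qubits, depth=10, shots=1000):
    """Time one execution of a layered rotation + entangling circuit."""
    qc = QuantumCircuit(n_qubits)
    for layer in range(depth):
        for q in range(n_qubits):
            qc.rx(0.1 * (layer + 1), q)
        for q in range(n_qubits - 1):
            qc.cx(q, q + 1)
    qc.measure_all()
    backend = AerSimulator()
    tqc = transpile(qc, backend)
    t0 = time.perf_counter()
    backend.run(tqc, shots=shots).result()
    return time.perf_counter() - t0

for n in range(4, 25, 4):  # runtime grows steeply with qubit count
    print(f"{n:2d} qubits: {benchmark(n):.3f} s")
```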
Current technological advances open up new opportunities for bringing human-machine interaction to a new level of human-centered cooperation. In this context, a key issue is semantic understanding of the environment, enabling mobile robots to engage in more complex interactions and communicate more easily with humans. Prerequisites are the vision-based registration of semantic objects and of humans, where the latter are further analyzed as potential interaction partners. Despite significant research achievements, reliable and fast registration of semantic information remains a challenging task for mobile robots in real-world scenarios. In this paper, we present a vision-based system for mobile assistive robots that enables semantic-aware environment perception without additional a priori knowledge. We deploy our system on a mobile humanoid robot, which enables us to test our methods in real-world applications.
Although the distribution patterns of cold-water corals such as Paragorgia arborea have received increasing attention in recent studies, little is known about their in situ activity patterns. In this paper, we examine polyp activity in P. arborea using machine learning techniques, analyzing high-resolution time-series data and photographs obtained from an autonomous lander cluster deployed in the Stjernsund, Norway. An interactive illustration of the model derived in this paper is provided as supplementary material. We find that the best predictor of the extension of the coral polyps is current direction with a lag of three hours. Variables not directly related to water flow, such as temperature and salinity, provide much less information about polyp activity. Interestingly, polyp activity is best predicted by sampling the laminar flow in the water column above the measurement site, rather than the more turbulent flow in the direct vicinity of the corals. Our results show that the activity patterns of P. arborea polyps are governed by the strong tidal current regime of the Stjernsund. It appears that P. arborea does not react to shorter changes in the ambient current regime, but instead adjusts its behavior in accordance with the large-scale pattern of the tidal cycle itself, in order to optimize nutrient uptake.
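A hedged sketch of this kind of lag analysis, with hypothetical column names: build time-lagged hydrographic features and rank them by how well they predict polyp extension, mirroring the three-hour-lag finding described above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Illustrative only: file layout and column names are assumptions.
df = pd.read_csv("lander_timeseries.csv", parse_dates=["time"], index_col="time")

# Shift each candidate predictor by several candidate lags.
features = {}
for var in ["current_direction", "current_speed", "temperature", "salinity"]:
    for lag_hours in [0, 1, 2, 3, 6]:
        features[f"{var}_lag{lag_hours}h"] = df[var].shift(freq=f"{lag_hours}h")

X = pd.DataFrame(features).reindex(df.index).dropna()
y = df.loc[X.index, "polyp_extension"]

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
ranking = pd.Series(model.feature_importances_, index=X.columns)
print(ranking.sort_values(ascending=False).head())  # e.g. current_direction_lag3h
```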
Limited public data are available to support research on malware analysis techniques. In particular, there are almost no publicly available datasets generated by rich sandboxes such as Cuckoo/CAPE. The benefit of using a dynamic sandbox is the realistic simulation of file execution in a target machine and the acquisition of the corresponding execution log. The machine can be infected by the malware, so there is a good chance of capturing the malicious behavior in the execution log, allowing researchers to study such behavior in detail. Although the subsequent analysis of log information is widely covered in industrial cybersecurity backends, to our knowledge only limited effort has been invested in academia to improve such log-analysis capabilities using state-of-the-art techniques. We make this sample dataset available to support the design of new machine learning methods for malware detection, especially for the automatic detection of generic malicious behavior. The dataset was collected in a collaboration between Avast Software and the Czech Technical University - AI Center (AIC).
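To illustrate how such execution logs are typically consumed, here is a hedged sketch with assumed JSON field names (the dataset's actual schema may differ): a sandbox report is reduced to a bag-of-API-calls feature vector, a common first baseline for behavioral malware detection.

```python
import json
from collections import Counter
from pathlib import Path

def report_to_features(path):
    """Count API calls across all processes in one sandbox report.
    Field names ("behavior", "processes", "calls", "api") are assumptions."""
    report = json.loads(Path(path).read_text())
    calls = Counter()
    for proc in report.get("behavior", {}).get("processes", []):
        for call in proc.get("calls", []):
            calls[call.get("api", "unknown")] += 1
    return calls

# Build per-sample feature dicts; vectorize with e.g.
# sklearn.feature_extraction.DictVectorizer before training a classifier.
features = [report_to_features(p) for p in Path("reports").glob("*.json")]
```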
Advances in machine learning (ML) have generated strong interest in using this technology to support decision making. Although predictions from complex ML models are often more accurate than those of traditional tools, such models typically hide the reasoning behind their predictions from users, which can hinder adoption and limit insight. Motivated by this tension, research has proposed explainable artificial intelligence (XAI) techniques that uncover the patterns found by ML. Despite the high hopes placed in both ML and XAI, there is little empirical evidence of their benefits for traditional businesses. To this end, we analyze data on 220,185 customers of an energy retailer, predict cross-buying with up to 86% correctness (AUC), and show that the XAI method SHAP provides explanations that hold for actual buyers. We further outline implications for research in information systems, XAI, and relational marketing.
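A minimal sketch of the SHAP workflow on a tree model follows; the data, features, and model choice here are synthetic placeholders, not the study's pipeline.

```python
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for customer features (X) and a binary
# cross-buying label (y).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, eval_metric="auc")
model.fit(X_train, y_train)

explainer = shap.TreeExplainer(model)        # fast, exact for tree ensembles
shap_values = explainer.shap_values(X_test)  # per-customer, per-feature attributions

shap.summary_plot(shap_values, X_test)       # global view of what drives predictions
```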